This article discusses the analysis of parallel algorithms. As in the analysis of "ordinary", sequential algorithms, one is typically interested in asymptotic bounds on resource consumption (mainly time spent computing), but the analysis is performed in the presence of multiple processor units that cooperate to perform computations. Thus, one can determine not only how many "steps" a computation takes, but also how much faster it becomes as the number of processors grows.

==Overview==
Suppose computations are executed on a machine that has <math>p</math> processors. Let <math>T_p</math> denote the time that elapses between the start of the computation and its end. Analysis of the computation's running time focuses on the following notions:

* The ''work'' of a computation executed by <math>p</math> processors is the total number of primitive operations that the processors perform. Ignoring communication overhead from synchronizing the processors, this is equal to the time used to run the computation on a single processor, denoted <math>T_1</math>.
* The ''span'' is the length of the longest series of operations that have to be performed sequentially due to data dependencies (the ''critical path''). The span may also be called the ''critical path length'' or the ''depth'' of the computation. Minimizing the span is important in designing parallel algorithms, because the span determines the shortest possible execution time. Alternatively, the span can be defined as the time <math>T_\infty</math> spent computing using an idealized machine with an infinite number of processors.
* The ''cost'' of the computation is the quantity <math>pT_p</math>. This expresses the total time spent, by all processors, in both computing and waiting.

Several useful results follow from the definitions of work, span and cost:

* ''Work law''. The cost is always at least the work: <math>pT_p \ge T_1</math>. This follows from the fact that <math>p</math> processors can perform at most <math>p</math> operations in parallel.
* ''Span law''. A finite number <math>p</math> of processors cannot outperform an infinite number, so that <math>T_p \ge T_\infty</math>.

Using these definitions and laws, the following measures of performance can be given:

* ''Speedup'' is the gain in speed made by parallel execution compared to sequential execution: <math>S_p = T_1 / T_p</math>. When the speedup is <math>\Omega(n)</math> for input size <math>n</math> (using big O notation), the speedup is linear, which is optimal in simple models of computation because the work law implies that <math>T_1 / T_p \le p</math> (super-linear speedup can occur in practice due to memory hierarchy effects). The situation <math>T_1 / T_p = p</math> is called perfect linear speedup. An algorithm that exhibits linear speedup is said to be scalable.
* ''Efficiency'' is the speedup per processor, <math>S_p / p</math>.
* ''Parallelism'' is the ratio <math>T_1 / T_\infty</math>. It represents the maximum possible speedup on any number of processors. By the span law, the parallelism bounds the speedup: if <math>p > T_1 / T_\infty</math>, then <math>T_1 / T_p \le T_1 / T_\infty < p</math>.
* The ''slackness'' is <math>T_1 / (pT_\infty)</math>. A slackness less than one implies (by the span law) that perfect linear speedup is impossible on <math>p</math> processors.
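To make these measures concrete, the following is a small Python sketch (not part of the original article) that evaluates speedup, efficiency, parallelism and slackness for hypothetical timings. The values of <math>T_1</math>, <math>T_\infty</math> and the per-processor running times are invented purely for illustration; they are chosen only so that they satisfy the work and span laws.

<syntaxhighlight lang="python">
# Illustrative sketch of the performance measures defined above.
# T1 (work) and Tinf (span) are hypothetical numbers, not measured data.

T1 = 1000.0   # work: running time on a single processor
Tinf = 10.0   # span: running time on an idealized machine with unlimited processors

def speedup(Tp):
    """Speedup S_p = T1 / Tp."""
    return T1 / Tp

def efficiency(p, Tp):
    """Efficiency = speedup per processor = T1 / (p * Tp)."""
    return T1 / (p * Tp)

# Parallelism T1 / Tinf: the maximum possible speedup on any number of processors.
parallelism = T1 / Tinf
print(f"parallelism = {parallelism:.0f}")

# Hypothetical running times T_p for a few processor counts; each entry obeys
# both the work law (p * Tp >= T1) and the span law (Tp >= Tinf).
timings = {1: 1000.0, 10: 105.0, 100: 20.0, 1000: 11.0}

for p, Tp in timings.items():
    slackness = T1 / (p * Tinf)
    print(f"p={p:5d}  speedup={speedup(Tp):7.1f}  "
          f"efficiency={efficiency(p, Tp):5.2f}  slackness={slackness:7.2f}")
</syntaxhighlight>

In this made-up example, the slackness drops below one at <math>p = 1000</math>, and the printed speedup correspondingly stays well below <math>p</math>, as the span law predicts.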